Experimental Comparison of Odometry Approaches
Authors
Abstract
Odometry is an important input to robot navigation systems, and we are interested in the performance of vision-only techniques. In this paper we experimentally evaluate and compare the performance of wheel odometry, monocular feature-based visual odometry, monocular patch-based visual odometry, and a technique that fuses wheel odometry and visual odometry, on a mobile robot operating in a typical indoor environment.

1 Motivation and Problem Statement

Fig. 1. The Adept Guiabot robot used in this paper.

Today most research robots, the prototypes of future commercial systems, use laser rangefinders (LRFs) as the sensor that informs the essential parts of a navigation system: odometry, place recognition and mapping. LRFs are self-contained, reliable and provide metric information about the world, but because they are built on mature electro-optical-mechanical technologies they will always be more expensive than cameras. They also capture only a slice through the scene, whereas a camera can instantaneously capture a view of a whole area, which is important for fast-moving robots. Imaging 3D sensors such as flash LIDAR (e.g. Swiss Ranger) or Kinect technology are entirely solid-state but function poorly in the presence of daylight. In principle, vision is able to perform the tasks of odometry and place recognition, and the sensors are cheaper and smaller than LRFs. Vision has other advantages: it is a passive sensor and thus immune to the interference problems that will occur when multiple robots emitting infra-red light operate in close proximity; it provides rich scene information such as color and texture which can assist semantic tagging; and when combined with motion it can recover 3D scene structure. However, vision is rarely reported for indoor navigation, and this paper details early results from a larger project aimed at developing a purely vision-based, large-scale and long-term mobile robot navigation system that works both indoors and outdoors.

This paper is concerned specifically with maximizing the accuracy of odometry. We view odometry as an essential part of any navigation system and recognize that external mechanisms to "close the loop" can improve odometry performance significantly, but the focus here is on maximizing odometry performance in isolation. We compare the performance of several odometry approaches: raw wheel odometry, patch-based monocular visual odometry (VO), state-of-the-art feature-based monocular VO, a method of fusing wheel odometry and VO using confidence metrics generated from the VO, and laser scan matching, which we treat as ground truth. The performance of these odometry techniques is evaluated in two distinct indoor environments that present a number of different challenges.
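The abstract mentions fusing wheel odometry and VO using confidence metrics generated from the VO, but this page does not give the fusion rule itself. The sketch below is a minimal, assumed interpretation for illustration only: a per-step, confidence-weighted blend of planar motion increments, where low VO confidence falls back to the wheel estimate. The names `Increment`, `fuse_increment`, and `integrate`, and the linear blending scheme, are hypothetical and not taken from the paper.

```python
# Hedged sketch: confidence-weighted fusion of per-step motion estimates
# from wheel odometry and monocular visual odometry (VO).
# This is NOT the authors' method; it only illustrates one plausible way
# a VO confidence value in [0, 1] could gate the visual correction.

import math
from dataclasses import dataclass


@dataclass
class Increment:
    """Planar motion between consecutive time steps, in the robot frame."""
    dx: float       # forward translation [m]
    dy: float       # lateral translation [m]
    dtheta: float   # heading change [rad]


def fuse_increment(wheel: Increment, visual: Increment, vo_confidence: float) -> Increment:
    """Blend wheel and visual increments; low VO confidence falls back to wheels."""
    w = max(0.0, min(1.0, vo_confidence))  # clamp confidence to [0, 1]
    return Increment(
        dx=(1.0 - w) * wheel.dx + w * visual.dx,
        dy=(1.0 - w) * wheel.dy + w * visual.dy,
        dtheta=(1.0 - w) * wheel.dtheta + w * visual.dtheta,
    )


def integrate(pose, inc: Increment):
    """Accumulate a fused increment into a global (x, y, theta) pose."""
    x, y, theta = pose
    x += inc.dx * math.cos(theta) - inc.dy * math.sin(theta)
    y += inc.dx * math.sin(theta) + inc.dy * math.cos(theta)
    theta = (theta + inc.dtheta + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return (x, y, theta)


# Example: wheels report straight motion, VO confidently reports a small turn.
pose = (0.0, 0.0, 0.0)
fused = fuse_increment(Increment(0.10, 0.0, 0.0),
                       Increment(0.09, 0.0, 0.02),
                       vo_confidence=0.8)
pose = integrate(pose, fused)
```

In practice the confidence metric would be derived from the VO itself (for example, from the number of tracked features or the residual of the motion estimate), but how the paper computes and applies it is not stated in this excerpt.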
Similar Papers
Evaluation of non-geometric methods for visual odometry
Visual Odometry (VO) is one of the fundamental building blocks of modern autonomous robot navigation and mapping. While most state-of-the-art techniques use geometrical methods for camera ego-motion estimation from optical flow vectors, in the last few years learning approaches have been proposed to solve this problem. These approaches are emerging and there is still much to explore. This work ...
6D Visual Odometry with Dense Probabilistic Egomotion Estimation
We present a novel approach to 6D visual odometry for vehicles with calibrated stereo cameras. A dense probabilistic egomotion (5D) method is combined with robust stereo feature-based approaches and Extended Kalman Filtering (EKF) techniques to provide high-quality estimates of the vehicle's angular and linear velocities. Experimental results show that the proposed method compares favorably with st...
Image Gradient-based Joint Direct Visual Odometry for Stereo Camera
Visual odometry is an important research problem for computer vision and robotics. In general, feature-based visual odometry methods rely heavily on accurate correspondences between local salient points, while direct approaches can make full use of the whole image and perform dense 3D reconstruction simultaneously. However, direct visual odometry usually suffers from the drawback ...
Comparison and Fusion of Odometry and GPS with Linear Filtering for Outdoor Robot Navigation
The present paper deals with the comparison and fusion of odometry and absolute Global Positioning System (GPS) measurements with complementary linear filtering for the navigation of an outdoor robot. The system was implemented on a home-made mobile robot named Rover Autonomous Navigation Tool (R-ANT). Experimental results are presented, which allow comparison of the two original measurements as well as the f...
Kintinuous: Spatially Extended KinectFusion
In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended scale environments in real-time. This is achieved through (i) altering the original algorithm such that the region of space being mapped by the KinectFusion algorithm can vary dynamically, (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume du...